VIO Systems


MLINE-VINS: Robust Monocular Visual-Inertial SLAM With Flow Manhattan and Line Features

Ye, Chao, Li, Haoyuan, Lin, Weiyang, Yang, Xianqiang

arXiv.org Artificial Intelligence

In this paper we introduce MLINE-VINS, a novel monocular visual-inertial odometry (VIO) system that leverages line features and the Manhattan World assumption. Specifically, for the line matching process, we propose a novel geometric line optical flow algorithm that efficiently tracks line features with varying lengths and does not require detections and descriptors in every frame. To address the instability of Manhattan estimation from line features, we propose a tracking-by-detection module that consistently tracks and optimizes Manhattan frames in consecutive images. By aligning the Manhattan World with the VIO world frame, the tracking can restart using the latest pose from the back-end, simplifying the coordinate transformations within the system. Furthermore, we implement a mechanism to validate Manhattan frames and a novel back-end optimization with global structural constraints. Extensive experimental results on various datasets, including benchmark and self-collected datasets, show that the proposed approach outperforms existing methods in terms of accuracy and long-range robustness.

Accuracy of pose estimation is a critical factor in various fields, such as autonomous driving, augmented reality, and robotics. Simultaneous localization and mapping (SLAM) has proven to be an effective approach to address this challenge [1], [2]. Among SLAM techniques, visual-inertial odometry (VIO) is particularly popular due to its cost-effectiveness, accuracy, and robustness. In VIO, point features are widely used for camera pose estimation due to their simplicity and efficiency. Representative point-based VIO systems include MSCKF-VIO [3], OK-VINS [4] and VINS-MONO [5], with VINS-MONO being one of the most widely adopted algorithms. However, the performance of point-based VIO is affected by the number and spatial distribution of points, and it is significantly hindered in textureless environments, where the lack of texture leads to point loss.
To address these limitations, line features are increasingly considered a valuable complement to point features, improving the robustness of VIO systems. Line features are commonly found in low-texture environments, particularly in man-made environments [6].
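The geometric line optical flow idea can be caricatured in a few lines: instead of re-detecting and describing lines in every frame, sample points along a segment, push them through a per-point flow, and refit the line to the moved samples. The sketch below is illustrative only and is not the authors' algorithm; the sampling density, the flow function, and the PCA-based refit are all assumed choices.

```python
import numpy as np

def track_line_segment(p0, p1, flow, n_samples=8):
    """Track a 2D segment by sampling points along it, moving each
    sample with a point flow, and refitting a line to the results."""
    ts = np.linspace(0.0, 1.0, n_samples)[:, None]
    pts = (1 - ts) * np.asarray(p0, float) + ts * np.asarray(p1, float)
    moved = np.array([p + flow(p) for p in pts])
    # total-least-squares line fit: PCA on the moved samples
    mean = moved.mean(axis=0)
    _, _, vt = np.linalg.svd(moved - mean)
    d = vt[0]                        # dominant direction of the samples
    proj = (moved - mean) @ d        # 1-D coordinates along that direction
    return mean + proj.min() * d, mean + proj.max() * d

# under a pure translation the tracked segment should shift rigidly
q0, q1 = track_line_segment((0, 0), (10, 0), lambda p: np.array([2.0, 3.0]))
```

Because the endpoints are refit rather than tracked individually, the segment's length can change from frame to frame, which matches the varying-length tracking the abstract describes.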


Observability Investigation for Rotational Calibration of (Global-pose aided) VIO under Straight Line Motion

Song, Junlin, Richard, Antoine, Olivares-Mendez, Miguel

arXiv.org Artificial Intelligence

Online extrinsic calibration is crucial for building "power-on-and-go" moving platforms, like robots and AR devices. However, blindly performing online calibration of an unobservable parameter may lead to unpredictable results. In the literature, extensive studies have been conducted on the extrinsic calibration between IMU and camera, from theory to practice. It is well known that the observability of the extrinsic parameters can be guaranteed under sufficient motion excitation. Furthermore, the impacts of degenerate motions have also been investigated. Despite these successful analyses, we identify an issue with the existing observability conclusion. This paper focuses on the observability investigation for straight-line motion, which is a common and fundamental degenerate motion in applications. We analytically prove that pure translational straight-line motion can lead to the unobservability of the rotational extrinsic parameter between IMU and camera (in at least one degree of freedom). By correcting the observability conclusion, our novel theoretical finding disseminates a more precise principle to the research community and provides an explainable calibration guideline for practitioners. Our analysis is validated by rigorous theory and experiments.


Adaptive VIO: Deep Visual-Inertial Odometry with Online Continual Learning

Pan, Youqi, Zhou, Wugen, Cao, Yingdian, Zha, Hongbin

arXiv.org Artificial Intelligence

Visual-inertial odometry (VIO) has demonstrated remarkable success due to its low-cost and complementary sensors. However, existing VIO methods lack the generalization ability to adjust to different environments and sensor attributes. In this paper, we propose Adaptive VIO, a new monocular visual-inertial odometry that combines online continual learning with traditional nonlinear optimization. Adaptive VIO comprises two networks to predict visual correspondence and IMU bias. Unlike end-to-end approaches that use networks to fuse the features from two modalities (camera and IMU) and predict poses directly, we combine neural networks with visual-inertial bundle adjustment in our VIO system. The optimized estimates are fed back to the visual and IMU bias networks, refining the networks in a self-supervised manner. Such a learning-optimization-combined framework and feedback mechanism enable the system to perform online continual learning. Experiments demonstrate that our Adaptive VIO exhibits adaptive capability on the EuRoC and TUM-VI datasets. The overall performance exceeds the currently known learning-based VIO methods and is comparable to the state-of-the-art optimization-based methods.
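The learning-optimization feedback loop described above can be reduced to a toy: a one-parameter "IMU bias network" is refined, self-supervised, by the bias that a stand-in "bundle adjustment" recovers. Everything here (the class, the scalar bias, the fake optimizer) is an illustrative assumption, not the paper's implementation.

```python
class BiasPredictor:
    """Stand-in for the IMU bias network: a single learnable scalar."""
    def __init__(self, lr=0.2):
        self.bias = 0.0
        self.lr = lr

    def update(self, refined_bias):
        # one gradient step on (bias - refined_bias)^2: the network is
        # supervised by the optimizer's feedback, not by ground truth
        self.bias += self.lr * (refined_bias - self.bias)

def bundle_adjust(measurement):
    # stand-in for visual-inertial bundle adjustment: with the (toy)
    # true signal known to be 1.0, the residual is the recovered bias
    return measurement - 1.0

net = BiasPredictor()
for _ in range(30):
    measurement = 1.0 + 0.5            # signal corrupted by true bias 0.5
    net.update(bundle_adjust(measurement))   # feedback closes the loop
```

After enough iterations the predictor converges toward the true bias without ever seeing a ground-truth label, which is the essence of the self-supervised refinement the abstract describes.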


RD-VIO: Robust Visual-Inertial Odometry for Mobile Augmented Reality in Dynamic Environments

Li, Jinyu, Pan, Xiaokun, Huang, Gan, Zhang, Ziyang, Wang, Nan, Bao, Hujun, Zhang, Guofeng

arXiv.org Artificial Intelligence

It is typically challenging for visual or visual-inertial odometry systems to handle the problems of dynamic scenes and pure rotation. In this work, we design a novel visual-inertial odometry (VIO) system called RD-VIO to handle both of these problems. Firstly, we propose an IMU-PARSAC algorithm which can robustly detect and match keypoints in a two-stage process. In the first stage, landmarks are matched with new keypoints using visual and IMU measurements. We collect statistical information from the matching and then use it to guide the intra-keypoint matching in the second stage. Secondly, to handle the problem of pure rotation, we detect the motion type and adapt the deferred-triangulation technique during the data-association process. We turn pure-rotational frames into special subframes. When solving the visual-inertial bundle adjustment, they provide additional constraints on the pure-rotational motion. We evaluate the proposed VIO system on public datasets. Experiments show the proposed RD-VIO has obvious advantages over other methods in dynamic environments.
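A minimal version of the motion-type detection step can be sketched as follows: compensate the keypoint motion with the IMU-predicted rotation and declare pure rotation when the remaining median parallax is tiny. This is a hedged sketch in the spirit of RD-VIO, not the paper's exact criterion; the threshold and the median statistic are assumptions.

```python
import numpy as np

def is_pure_rotation(prev_pts, curr_pts, R, K, thresh_px=1.0):
    """Warp previous keypoints by the rotation R (camera frame) and
    compare with current keypoints; small residual parallax suggests
    pure rotation, so triangulation of these landmarks is deferred."""
    prev_pts = np.asarray(prev_pts, float)
    curr_pts = np.asarray(curr_pts, float)
    rays = np.linalg.inv(K) @ np.vstack([prev_pts.T, np.ones(len(prev_pts))])
    warped = (K @ (R @ rays)).T            # rotate bearing rays, reproject
    warped = warped[:, :2] / warped[:, 2:3]
    residual = np.linalg.norm(curr_pts - warped, axis=1)
    return bool(np.median(residual) < thresh_px)

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
pts = np.array([[100.0, 120.0], [300.0, 200.0], [500.0, 400.0]])
```

For example, identical keypoints under an identity rotation yield zero residual (pure rotation), while a uniform pixel shift with the same rotation does not.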


HDVIO: Improving Localization and Disturbance Estimation with Hybrid Dynamics VIO

Cioffi, Giovanni, Bauersfeld, Leonard, Scaramuzza, Davide

arXiv.org Artificial Intelligence

Visual-inertial odometry (VIO) is the most common approach for estimating the state of autonomous micro aerial vehicles using only onboard sensors. Existing methods improve VIO performance by including a dynamics model in the estimation pipeline. However, such methods degrade in the presence of low-fidelity vehicle models and continuous external disturbances, such as wind. Our proposed method, HDVIO, overcomes these limitations by using a hybrid dynamics model that combines a point-mass vehicle model with a learning-based component that captures complex aerodynamic effects. HDVIO estimates the external force and the full robot state by leveraging the discrepancy between the actual motion and the predicted motion of the hybrid dynamics model. Our hybrid dynamics model uses a history of thrust and IMU measurements to predict the vehicle dynamics. To demonstrate the performance of our method, we present results on both public and novel drone dynamics datasets and show real-world experiments of a quadrotor flying in strong winds up to 25 km/h. The results show that our approach improves the motion and external force estimation compared to the state-of-the-art by up to 33% and 40%, respectively. Furthermore, differently from existing methods, we show that it is possible to predict the vehicle dynamics accurately while having no explicit knowledge of its full state.
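The core residual idea behind the disturbance estimation can be shown in one dimension: whatever acceleration the nominal model fails to explain is attributed to an external force. This is a deliberately simplified caricature; the function name, the 1-D vertical axis, and the bare point-mass model are assumptions, and HDVIO additionally adds a learned aerodynamic term to the model prediction.

```python
def external_force(mass, thrust, accel_meas, g=9.81):
    """Estimate an external force (e.g. wind) as the discrepancy
    between measured vertical acceleration and the point-mass
    prediction a = T/m - g (1-D sketch, illustrative only)."""
    predicted = thrust / mass - g
    return mass * (accel_meas - predicted)
```

A hovering vehicle (thrust exactly cancelling gravity, zero measured acceleration) yields zero estimated force, while any unexplained acceleration maps directly to a force of `mass * residual`.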


Experimental Evaluation of Visual-Inertial Odometry Systems for Arable Farming

Cremona, Javier, Comelli, Román, Pire, Taihú

arXiv.org Artificial Intelligence

The farming industry constantly seeks the automation of different processes involved in agricultural production, such as sowing, harvesting and weed control. The use of mobile autonomous robots to perform those tasks is of great interest. Arable lands present hard challenges for Simultaneous Localization and Mapping (SLAM) systems, key for mobile robotics, given the visual difficulty due to the highly repetitive scene and the crop leaf movement caused by the wind. In recent years, several Visual-Inertial Odometry (VIO) and SLAM systems have been developed. They have proved to be robust and capable of achieving high accuracy in indoor and outdoor urban environments. However, they were not properly assessed in agricultural fields. In this work we assess the most relevant state-of-the-art VIO systems in terms of accuracy and processing time on arable lands in order to better understand how they behave in these environments. In particular, the evaluation is carried out on a collection of sensor data recorded by our wheeled robot in a soybean field, which was publicly released as the Rosario Dataset. The evaluation shows that the highly repetitive appearance of the environment, the strong vibration produced by the rough terrain and the movement of the leaves caused by the wind expose the limitations of the current state-of-the-art VIO and SLAM systems. We analyze the systems' failures and highlight the observed drawbacks, including initialization failures, tracking loss and sensitivity to IMU saturation. Finally, we conclude that even though certain systems like ORB-SLAM3 and S-MSCKF show good results with respect to others, more improvements should be made to make them reliable in agricultural fields for certain applications such as soil tillage of crop rows and pesticide spraying.
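Accuracy in evaluations like this one is commonly reported as absolute trajectory error (ATE). A minimal translation-only version is sketched below; a full ATE computation also aligns rotation, and for monocular systems scale (e.g. with the Umeyama method), so treat this as an illustrative simplification rather than the evaluation protocol of the paper above.

```python
import numpy as np

def ate_rmse(est, gt):
    """RMSE of position error between an estimated trajectory and
    ground truth after translation-only alignment (centroid removal)."""
    est = np.asarray(est, float)
    gt = np.asarray(gt, float)
    err = (est - est.mean(axis=0)) - (gt - gt.mean(axis=0))
    return float(np.sqrt((np.linalg.norm(err, axis=1) ** 2).mean()))

gt = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]]
est = [[5.0, 5.0, 5.0], [6.0, 5.0, 5.0], [7.0, 5.0, 5.0]]  # gt + offset
```

A constant offset between the trajectories is removed by the alignment, so only shape differences (drift, tracking glitches) contribute to the error.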


An Empirical Evaluation of Four Off-the-Shelf Proprietary Visual-Inertial Odometry Systems

Kim, Jungha, Song, Minkyeong, Lee, Yeoeun, Jung, Moonkyeong, Kim, Pyojin

arXiv.org Artificial Intelligence

This article presents a benchmark comparison of off-the-shelf proprietary visual-inertial odometry (VIO) systems in six challenging real-world environments, both indoors and outdoors. VIO systems are used for autonomous navigation in robotic applications; VIO is the process of determining the position and orientation of a camera-inertial measurement unit (IMU) rig in 3D space by analyzing the associated camera images and IMU data. As VIO research has reached a level of maturity, there exist several openly published VIO methods such as MSCKF [1], OKVIS [2], VINS-Mono [3], and many commercial products. In particular, we select the following four proprietary VIO systems that are frequently used in autonomous driving and robotic applications: Apple ARKit [4], Apple's augmented reality (AR) platform, which includes filtering-based VIO algorithms [8] to enable iOS devices to sense how they move in 3D space.